We introduce the notion of a Patch Sampling Schedule (PSS), which varies the number of Vision Transformer (ViT) patches used per batch during training. Since not all patches are equally important for most vision objectives (e.g., classification), we argue that less important patches can be used in fewer training iterations, leading to shorter training time with minimal impact on performance. Additionally, we observe that training with a PSS makes a ViT more robust to a wider range of patch sampling during inference. This allows for a fine-grained, dynamic trade-off between throughput and accuracy during inference. We evaluate PSSs on ViTs for ImageNet, both trained from scratch and pre-trained using a reconstruction loss function. For the pre-trained model, we incur only a 0.26% reduction in classification accuracy while reducing training time from 25 to 17 hours, compared to using all patches in every iteration. Code, model checkpoints, and logs are available at https://github.com/bradmcdanel/pss.
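To make the idea concrete, below is a minimal sketch of a patch sampling schedule in PyTorch, assuming a hypothetical cyclic keep-ratio schedule and an L2-norm importance proxy for scoring patches; the paper's actual schedule and scoring rule may differ.

```python
import torch

def keep_ratio(epoch: int, low: float = 0.5, high: float = 1.0, period: int = 4) -> float:
    """Fraction of patches to keep this epoch (hypothetical cyclic schedule)."""
    phase = epoch % period
    return low + (high - low) * phase / (period - 1)

def sample_patches(tokens: torch.Tensor, ratio: float) -> torch.Tensor:
    """Keep the top-`ratio` patch tokens per image, scored by L2 norm.

    tokens: (batch, num_patches, dim) patch embeddings (CLS token excluded).
    """
    b, n, d = tokens.shape
    k = max(1, int(n * ratio))
    scores = tokens.norm(dim=-1)                # (b, n) importance proxy
    idx = scores.topk(k, dim=1).indices         # indices of kept patches
    return tokens.gather(1, idx.unsqueeze(-1).expand(b, k, d))

# Usage: inside the training loop, subsample patch tokens before the encoder.
tokens = torch.randn(8, 196, 768)               # e.g. ViT-B/16 on 224x224 inputs
kept = sample_patches(tokens, keep_ratio(epoch=3))
print(kept.shape)                               # full ratio at this epoch: (8, 196, 768)
```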
Deep neural networks are state-of-the-art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network.
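As a hedged illustration of the early-exit mechanism, here is a sketch of BranchyNet-style inference using the entropy of the softmax output as the confidence measure, as in the paper; the module names and thresholds are placeholders, and the snippet assumes a batch of one sample.

```python
import torch
import torch.nn.functional as F

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax distribution; low entropy = high confidence."""
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

@torch.no_grad()
def branchy_infer(x, trunk_stages, branch_heads, thresholds):
    """Run stages in order; exit at the first branch whose entropy is low enough.

    trunk_stages: list of nn.Module feature extractors applied sequentially.
    branch_heads: list of classifiers, one per stage (the last is the final exit).
    thresholds:   per-branch entropy thresholds (last entry ignored: always exit).
    """
    h = x
    for i, (stage, head) in enumerate(zip(trunk_stages, branch_heads)):
        h = stage(h)
        logits = head(h)
        # .item() assumes batch size 1 in this sketch
        if i == len(trunk_stages) - 1 or entropy(logits).item() < thresholds[i]:
            return logits, i            # prediction and the exit index taken
```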
A new method for solving the wave equation is presented, called the learned Born series (LBS), which is derived from a convergent Born series but whose components are found through training. The LBS is shown to be significantly more accurate than the convergent Born series for the same number of iterations, in the presence of high-contrast scatterers, while maintaining a comparable computational complexity. The LBS is able to generate a reasonable prediction of the global pressure field with a small number of iterations, and the errors decrease with the number of learned iterations.
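A rough sketch of what "learned iterations" can look like, assuming the classical convergent-Born update u_{k+1} = u_k + gamma * (G(V u_k + s) - u_k) with the Green's operator G, scattering potential V, and preconditioner gamma replaced by small learned convolutions; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class LearnedBornStep(nn.Module):
    """One learned iteration of the Born update (hypothetical parameterization)."""
    def __init__(self, channels: int = 2):   # real/imag parts of the field
        super().__init__()
        self.V = nn.Conv2d(channels, channels, 1)             # learned potential
        self.G = nn.Conv2d(channels, channels, 9, padding=4)  # learned Green's operator
        self.gamma = nn.Conv2d(channels, channels, 1)         # learned preconditioner

    def forward(self, u, s):
        return u + self.gamma(self.G(self.V(u) + s) - u)

class LearnedBornSeries(nn.Module):
    """Fixed number of learned iterations, starting from a zero field."""
    def __init__(self, num_iters: int = 8):
        super().__init__()
        self.steps = nn.ModuleList(LearnedBornStep() for _ in range(num_iters))

    def forward(self, s):
        u = torch.zeros_like(s)
        for step in self.steps:
            u = step(u, s)
        return u
```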
Actively monitoring machine learning models during production operations helps ensure prediction quality and supports detection and remediation of unexpected or undesired conditions. Monitoring models already deployed in big data environments brings the additional challenges of adding monitoring in parallel to the existing modelling workflow and controlling resource requirements. In this paper, we describe (1) a framework for monitoring machine learning models; and, (2) its implementation for a big data supply chain application. We use our implementation to study drift in model features, predictions, and performance on three real data sets. We compare hypothesis-test and information-theoretic approaches to drift detection in features and predictions using the Kolmogorov-Smirnov distance and the Bhattacharyya coefficient. Results showed that model performance was stable over the evaluation period. Features and predictions showed statistically significant drifts; however, these drifts were not linked to changes in model performance during the time of our study.
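The two drift measures mentioned can be sketched as follows, using scipy's two-sample Kolmogorov-Smirnov test and a histogram-based Bhattacharyya coefficient; the bin count, sample sizes, and significance level are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.stats import ks_2samp

def bhattacharyya_coefficient(ref: np.ndarray, cur: np.ndarray, bins: int = 30) -> float:
    """BC in [0, 1]; 1 means identical histograms, lower values suggest drift."""
    lo, hi = min(ref.min(), cur.min()), max(ref.max(), cur.max())
    p, _ = np.histogram(ref, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(cur, bins=bins, range=(lo, hi), density=True)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sqrt(p * q).sum())

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)     # feature values at training time
current = rng.normal(0.3, 1.0, 5000)       # same feature observed in production

stat, p_value = ks_2samp(reference, current)
print(f"KS stat={stat:.3f}, p={p_value:.2e}, drift={p_value < 0.01}")
print(f"Bhattacharyya coefficient={bhattacharyya_coefficient(reference, current):.3f}")
```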
Although prediction models for delirium, a commonly occurring condition during general hospitalization or post-surgery, have not gained huge popularity, their algorithmic bias evaluation is crucial due to the existing association between social determinants of health and delirium risk. In this context, using MIMIC-III and another academic hospital dataset, we present some initial experimental evidence showing how sociodemographic features such as sex and race can impact the model performance across subgroups. With this work, our intent is to initiate a discussion about the intersectionality effects of old age, race and socioeconomic factors on the early-stage detection and prevention of delirium using ML.
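A minimal sketch of the kind of subgroup audit described, assuming AUROC as the performance metric and hypothetical column names (delirium_label, risk_score); the actual studies' metrics and features may differ.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auroc(df: pd.DataFrame, group_col: str) -> pd.Series:
    """AUROC of delirium predictions within each subgroup (e.g., sex, race)."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["delirium_label"], g["risk_score"])
        if g["delirium_label"].nunique() > 1 else float("nan")
    )

# Gaps between subgroup AUROCs flag potential algorithmic bias, e.g.:
# subgroup_auroc(cohort, "sex"); subgroup_auroc(cohort, "race")
```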
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks. Several GNN frameworks have then been developed for fast and easy implementation of GNN models. Despite their popularity, they are not well documented, and their implementations and system performance have not been well understood. In particular, unlike the traditional GNNs that are trained based on the entire graph in a full-batch manner, recent GNNs have been developed with different graph sampling techniques for mini-batch training of GNNs on large graphs. While they improve the scalability, their training times still depend on the implementations in the frameworks as sampling and its associated operations can introduce non-negligible overhead and computational cost. In addition, it is unknown how much the frameworks are 'eco-friendly' from a green computing perspective. In this paper, we provide an in-depth study of two mainstream GNN frameworks along with three state-of-the-art GNNs to analyze their performance in terms of runtime and power/energy consumption. We conduct extensive benchmark experiments at several different levels and present detailed analysis results and observations, which could be helpful for further improvement and optimization.
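Measuring the runtime and power of a training epoch might look like the sketch below, assuming an NVIDIA GPU and the pynvml bindings; epoch_fn stands in for one mini-batch training epoch in the framework under test, and a real benchmark would poll power throughout the epoch rather than sampling it once.

```python
import time
import pynvml

def measure_epoch(epoch_fn, device_index: int = 0):
    """Return (runtime_s, power_w, approx_energy_j) for one training epoch."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    start = time.perf_counter()
    epoch_fn()                                                 # run one training epoch
    runtime = time.perf_counter() - start
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
    pynvml.nvmlShutdown()
    # energy approximated as instantaneous power x runtime; poll continuously
    # in a background thread for a faithful energy measurement
    return runtime, power_w, power_w * runtime
```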
Proteins play a central role in biology from immune recognition to brain activity. While major advances in machine learning have improved our ability to predict protein structure from sequence, determining protein function from structure remains a major challenge. Here, we introduce Holographic Convolutional Neural Network (H-CNN) for proteins, which is a physically motivated machine learning approach to model amino acid preferences in protein structures. H-CNN reflects physical interactions in a protein structure and recapitulates the functional information stored in evolutionary data. H-CNN accurately predicts the impact of mutations on protein function, including stability and binding of protein complexes. Our interpretable computational model for protein structure-function maps could guide design of novel proteins with desired function.
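One common way a structure-conditioned amino-acid preference model scores mutations is via the log-odds between mutant and wild-type residues; the sketch below assumes a hypothetical model interface returning 20-way logits per site, and H-CNN's exact readout may differ.

```python
import torch

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

@torch.no_grad()
def mutation_effect(model, structure_features, site, wt_aa, mut_aa):
    """Log-odds score of mutating `site` from wt_aa to mut_aa.

    model maps local structural features to 20-way amino-acid logits per site
    (hypothetical interface); positive scores favor the mutant residue.
    """
    logits = model(structure_features)            # (num_sites, 20)
    logp = torch.log_softmax(logits, dim=-1)
    return (logp[site, AA_INDEX[mut_aa]] - logp[site, AA_INDEX[wt_aa]]).item()
```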
Kitting refers to preparing and grouping the necessary parts and tools (a "kit") for assembly in a manufacturing environment. Automating this process streamlines assembly tasks for human workers and improves efficiency. Existing automated kitting systems follow scripted instructions and predefined heuristics. However, given variability in part availability and logistics delays, the rigidity of existing systems can limit the overall efficiency of an assembly line. In this paper, we propose a bi-level optimization framework that enables a robot to perform task-segmentation-based part selection, kit arrangement, and delivery scheduling in order to provide customized kits just in time, that is, exactly when they are needed. We evaluate the proposed approach through a human-subjects study (N = 18) involving the assembly of a flat-pack furniture table and a shop-flow simulation built on data from the study. Our results show that the just-in-time kitting system is more efficient, more resilient to upstream shop-flow delays, and better preferred by users than a baseline approach that uses rigid task segmentation boundaries defined by the task graph itself and delivers a single kit containing all parts needed to assemble a single unit.
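A toy sketch of the bi-level structure, with hypothetical costs: the outer level chooses the task-segmentation boundary (which parts go in which kit), and the inner level scores each kit's just-in-time delivery; the actual framework's decision variables and objectives are richer than this.

```python
def kit_lateness(kit, task_start, prep_per_part=1.0):
    """Inner level: deliver just in time. A kit is ready after preparation
    (proportional to its size); if that falls after its first task's start
    time, the worker waits, and that wait is the cost."""
    ready = len(kit) * prep_per_part
    start = min(task_start[t] for t in kit)
    return max(0.0, ready - start)

def best_segmentation(tasks, task_start):
    """Outer level: try each cut point, keep the one minimizing total wait."""
    best = None
    for cut in range(1, len(tasks)):
        kits = [tasks[:cut], tasks[cut:]]
        cost = sum(kit_lateness(k, task_start) for k in kits)
        if best is None or cost < best[0]:
            best = (cost, kits)
    return best

tasks = ["legs", "frame", "tabletop", "screws"]
start_times = {"legs": 0.0, "frame": 3.0, "tabletop": 6.0, "screws": 7.5}
print(best_segmentation(tasks, start_times))   # a small first kit starts work soonest
```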
CBCT in image-guided radiation therapy provides critical anatomical information for patient setup and plan evaluation, and longitudinal CBCT image registration can quantify inter-fractional anatomical changes. The purpose of this study is to propose an unsupervised deep-learning-based CBCT-CBCT deformable image registration. The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial-transformation-based network (STN). The STN consists of a global generative adversarial network (GlobalGAN) and a local GAN (LocalGAN) that predict coarse-scale and fine-scale motion, respectively. The networks were trained by minimizing an image similarity loss and a deformable vector field (DVF) regularization loss, without supervision from ground-truth DVFs. In the inference stage, patches of the local DVF are predicted by the trained LocalGAN and fused to form a whole-image DVF, which is then combined with the GlobalGAN-generated DVF to obtain the final DVF. In experiments, the method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients, plus 105 fractional CBCTs from a hold-out cohort of 21 different abdominal cancer patients. Qualitatively, the registration results show good alignment between the deformed and target CBCT images. Quantitatively, the average target registration error (TRE) computed on fiducial markers and manually identified landmarks was 1.91 ± 1.11 mm, and the mean absolute error (MAE) and normalized cross-correlation (NCC) between the deformed and target CBCT images were 33.42 ± 7.48 HU and 0.94 ± 0.04, respectively. This promising registration method could provide fast and accurate longitudinal CBCT alignment to facilitate inter-fractional anatomical change analysis and prediction.
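The unsupervised objective (image similarity plus DVF regularization) can be sketched as below, assuming a normalized cross-correlation similarity, a gradient-based smoothness penalty, and a standard grid_sample warp; the GAN components and exact losses of the paper are not reproduced here.

```python
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, dvf: torch.Tensor) -> torch.Tensor:
    """Warp a (B,1,D,H,W) volume by a (B,3,D,H,W) voxel displacement field
    with channels ordered (dx, dy, dz)."""
    b, _, d, h, w = moving.shape
    zz, yy, xx = torch.meshgrid(torch.arange(d), torch.arange(h),
                                torch.arange(w), indexing="ij")
    base = torch.stack((xx, yy, zz), dim=-1).to(moving)          # (D,H,W,3) identity grid
    coords = base.unsqueeze(0) + dvf.permute(0, 2, 3, 4, 1)      # displaced voxel coords
    scale = torch.tensor([w - 1, h - 1, d - 1], device=moving.device)
    grid = 2.0 * coords / scale - 1.0                            # normalize to [-1, 1]
    return F.grid_sample(moving, grid, align_corners=True)

def ncc(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Global normalized cross-correlation between two volumes."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def dvf_smoothness(dvf: torch.Tensor) -> torch.Tensor:
    """Penalize spatial gradients of the displacement field."""
    dz = (dvf[..., 1:, :, :] - dvf[..., :-1, :, :]).pow(2).mean()
    dy = (dvf[..., :, 1:, :] - dvf[..., :, :-1, :]).pow(2).mean()
    dx = (dvf[..., :, :, 1:] - dvf[..., :, :, :-1]).pow(2).mean()
    return dx + dy + dz

def registration_loss(moving, target, dvf, lam: float = 0.01) -> torch.Tensor:
    """Maximize similarity of warped-moving vs. target; keep the DVF smooth."""
    return -ncc(warp(moving, dvf), target) + lam * dvf_smoothness(dvf)
```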
We implemented two distinct three-dimensional deep learning neural networks and evaluated their ability to segment intracranial hemorrhage (ICH) seen on non-contrast computed tomography (CT). One model, referred to as "voxels intersecting along orthogonal levels of attention U-Net" (Viola-Unet), has architectural elements adapted for the 2022 INSTANCE data challenge. The second, comparison model was derived from the no-new U-Net (nnU-Net). Input images and ground-truth segmentation maps were used to train the two networks separately in a supervised manner; validation data were subsequently used for semi-supervised training. Model predictions were compared during 5-fold cross-validation. Viola-Unet outperformed the comparison network on two of the four performance metrics (i.e., NSD and RVD), and an ensemble model combining the Viola-Unet and nnU-Net networks had the highest performance in terms of DSC and HD. We demonstrate that the ICH segmentation performance advantage over a 3D U-Net comes from effectively fusing spatially orthogonal features during the decoding branch of the U-Net. The code base, pretrained weights, and Docker image of the Viola-Unet AI tool will be publicly available at https://github.com/samleoqh/viola-unet.
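Ensembling the two networks' predictions might be as simple as averaging per-voxel posteriors, as in the sketch below; this assumes both models emit logits over the same grid and uses a plain mean, whereas the challenge entry may weight or post-process the outputs differently.

```python
import torch

@torch.no_grad()
def ensemble_segment(ct_volume, viola_unet, nnunet, threshold=0.5):
    """Average the two models' softmax posteriors and threshold the ICH class."""
    p1 = torch.softmax(viola_unet(ct_volume), dim=1)   # (B, classes, D, H, W)
    p2 = torch.softmax(nnunet(ct_volume), dim=1)
    prob = 0.5 * (p1 + p2)                             # mean of the two posteriors
    return (prob[:, 1] > threshold).long()             # binary ICH mask
```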